FLEX: Extrinsic Parameters-free Multi-view 3D Human Motion Reconstruction

Authors

Abstract

The increasing availability of video recordings made by multiple cameras has offered new means for mitigating occlusion and depth ambiguities in pose and motion reconstruction methods. Yet, multi-view algorithms strongly depend on camera parameters, particularly on the relative transformations between the cameras. Such a dependency becomes a hurdle once shifting to dynamic capture in uncontrolled settings. We introduce FLEX (Free muLti-view rEconstruXion), an end-to-end extrinsic parameter-free multi-view model. FLEX is extrinsic parameter-free (dubbed ep-free) in the sense that it does not require extrinsic camera parameters. Our key idea is that the 3D angles between skeletal parts, as well as bone lengths, are invariant to the camera position. Hence, learning 3D rotations and bone lengths rather than locations allows predicting common values for all views. Our network takes multiple video streams, learns fused deep features through a novel fusion layer, and reconstructs a single consistent skeleton with temporally coherent joint rotations. We demonstrate quantitative and qualitative results on three public data sets, and on multi-person synthetic streams captured by dynamic cameras. We compare our model to state-of-the-art methods that are not ep-free and show that in the absence of camera parameters we outperform them by a large margin, while obtaining comparable results when the parameters are available. Code, trained models, and other materials are available at https://briang13.github.io/FLEX.
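The key idea, that bone lengths and the angles between skeletal parts are invariant to the camera's position, can be checked with a short NumPy sketch. The skeleton, bone connectivity, and the rigid transform below are illustrative placeholders, not the paper's actual joint set or cameras:

```python
import numpy as np

rng = np.random.default_rng(0)
joints = rng.normal(size=(17, 3))    # hypothetical 3D joint positions in one view
bones = [(0, 1), (1, 2), (2, 3)]     # hypothetical parent-child joint pairs

def bone_lengths(j):
    """Length of each bone as the distance between its two joints."""
    return np.array([np.linalg.norm(j[a] - j[b]) for a, b in bones])

# A rigid transform (rotation R plus translation t) standing in for another
# camera's extrinsic parameters.
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t = np.array([1.0, -2.0, 0.5])
joints_other_view = joints @ R.T + t

# Bone lengths are identical in both views: rotations preserve norms and the
# translation cancels when taking differences between joints.
assert np.allclose(bone_lengths(joints), bone_lengths(joints_other_view))
print("bone lengths are view-invariant")
```

Because these quantities are shared across all views, a network can regress a single set of rotations and lengths from fused multi-view features instead of per-view joint locations.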


Similar resources

3D Adaptive Reconstruction of Human Motion From Multi-Sensors

In wireless body sensor networks, sensors may be installed on various body limbs to wirelessly collect body information for homecare services. The orientations and accelerations of each limb differ across motion states. For example, each limb has a different acceleration when walking versus running, and a different orientation when standing versus lying. According to the above information, the bo...


Evaluation of Multi-view 3D Reconstruction Software

A number of software solutions for reconstructing 3D models from multi-view image sets have been released in recent years. Based on an unordered collection of photographs, most of these solutions extract 3D models using structure-from-motion (SFM) algorithms. In this work, we compare the resulting 3D models qualitatively and quantitatively. To achieve these objectives, we have developed differe...


Multi-View Stereo 3D Edge Reconstruction

This paper presents a novel method for the reconstruction of 3D edges in multi-view stereo scenarios. Previous research in the field typically relied on video sequences and limited the reconstruction process to either straight line segments, or edge-points, i.e., 3D points that correspond to image edges. We instead propose a system, denoted as EdgeGraph3D, able to recover both straight and curve...


A Genetic Algorithm based Optimization Method in 3D Solid Reconstruction from 2D Multi-View Engineering Drawings

There are two main categories of 3D reconstruction from 2D drawings: B-Rep and CSG; both methods, despite being useful, have serious weaknesses. The B-Rep method, which is older and has a wider functional range, is problematic because of its high volume of calculations and ambiguity in its answers, while the CSG method is limited in the range of volumes and drawings that it can an...


3D Reconstruction of Human Motion from Video

This paper presents a novel framework for 3D full body reconstruction of human motion from uncalibrated monocular video data. We first detect and track feature sets from video sequences by employing MSER and SURF feature detection techniques together with prior information obtained from the motion capture database. By deriving suitable feature sets from both video and motion capture data, we ar...



Journal

Journal title: Lecture Notes in Computer Science

Year: 2022

ISSN: 1611-3349, 0302-9743

DOI: https://doi.org/10.1007/978-3-031-19827-4_11